19 research outputs found

    Stochastic Reward Net-based Modeling Approach for Availability Quantification of Data Center Systems

    Availability quantification and prediction of IT infrastructure in data centers are of paramount importance for online business enterprises. In this chapter, we present comprehensive availability models for practical case studies in order to demonstrate state-space stochastic reward net modeling of typical data center systems for the quantitative assessment of system availability. We present stochastic reward net models of a virtualized server system, a data center network based on DCell topology, and a conceptual data center for disaster tolerance. The systems are then evaluated against various metrics of interest, including steady-state availability, downtime and downtime cost, and sensitivity analysis.
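    As a concrete illustration of the measures named above, here is a minimal Python sketch (an assumption-laden stand-in, not the chapter's SRN models) that computes steady-state availability, annual downtime, and downtime cost for a two-state repairable system modeled as a CTMC, the formalism underlying stochastic reward nets. The failure rate, repair rate, and cost rate are illustrative values.

```python
# Minimal sketch: steady-state availability of a two-state repairable system.
# All rates and costs below are assumed, illustrative values.
import numpy as np

lam = 1.0 / 2000.0  # assumed failure rate (1/h), i.e. MTTF = 2000 h
mu = 1.0 / 4.0      # assumed repair rate (1/h), i.e. MTTR = 4 h

# CTMC generator matrix over states [UP, DOWN]; each row sums to zero.
Q = np.array([[-lam, lam],
              [mu, -mu]])

# Solve pi @ Q = 0 subject to sum(pi) = 1 for the stationary distribution.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]                                 # P(system is UP)
downtime_min = (1.0 - availability) * 365 * 24 * 60  # expected minutes/year
cost_rate = 100.0                                    # assumed $/min of downtime

print(f"steady-state availability: {availability:.6f}")
print(f"annual downtime: {downtime_min:.1f} min, cost: ${downtime_min * cost_rate:,.0f}")
```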

    A Hierarchical Modeling and Analysis Framework for Availability and Security Quantification of IoT Infrastructures

    No full text
    Modeling a complete Internet of Things (IoT) infrastructure is crucial to assessing its availability and security characteristics. However, modern IoT infrastructures often have a complex and heterogeneous architecture, so capturing both the architecture and the operational details of the IoT infrastructure in a monolithic model is a challenge for system practitioners and developers. In that regard, this paper proposes a hierarchical modeling framework for the availability and security quantification of IoT infrastructures. The modeling methodology is based on a hierarchical model with three levels: (i) a reliability block diagram (RBD) at the top level to capture the overall architecture of the IoT infrastructure, (ii) fault trees (FT) at the middle level to elaborate the system architectures of the member systems, and (iii) continuous-time Markov chains (CTMC) at the bottom level to capture the detailed operational states and transitions of the bottom-level subsystems. We consider a specific case study of an IoT smart factory infrastructure to demonstrate the feasibility of the modeling framework. The IoT smart factory infrastructure is composed of integrated cloud, fog, and edge computing paradigms. A complete hierarchical model of RBD, FT, and CTMC is developed, and a variety of availability and security measures are computed and analyzed. The analysis results of the case study show that more frequent failures in the cloud cause more severe decreases in overall availability, while faster recovery of the edge enhances the availability of the IoT smart factory infrastructure. The results also reveal that the cloud servers’ virtual machine monitor (VMM) and virtual machine (VM) and the fog server’s operating system (OS) are the components most vulnerable to cyber-security attack intensity. The proposed modeling and analysis framework, coupled with further investigation of the analysis results, helps develop and operate IoT infrastructures so as to achieve the highest availability and security measures and provides development guidelines for decision-making processes in practice.
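    To make the three-level composition concrete, the following Python sketch (with assumed failure and repair rates and a deliberately simplified structure) mirrors the idea: a two-state CTMC yields each component's steady-state availability, a fault-tree-style gate models redundancy inside a member system, and a series RBD combines the member systems. It is not a reproduction of the paper's full model.

```python
# Hierarchical availability sketch: CTMC -> fault tree -> RBD.
# All MTTF/MTTR values are assumed for illustration.

def ctmc_availability(mttf_h: float, mttr_h: float) -> float:
    # Two-state CTMC steady state: pi_up = MTTF / (MTTF + MTTR).
    return mttf_h / (mttf_h + mttr_h)

def ft_redundant(*avails: float) -> float:
    # Fault-tree AND gate over failures: the subsystem fails only if
    # every redundant branch fails.
    q = 1.0
    for a in avails:
        q *= 1.0 - a
    return 1.0 - q

def rbd_series(*avails: float) -> float:
    # Series RBD: every block must be up for the system to be up.
    p = 1.0
    for a in avails:
        p *= a
    return p

# Assumed bottom-level components (MTTF, MTTR in hours).
vmm = ctmc_availability(3000, 2.0)    # virtual machine monitor
vm = ctmc_availability(2500, 1.0)     # virtual machine
fog_os = ctmc_availability(4000, 3.0)
edge = ctmc_availability(1500, 0.5)

server = rbd_series(vmm, vm)           # a cloud server needs VMM and VM up
cloud = ft_redundant(server, server)   # two redundant cloud servers
overall = rbd_series(cloud, fog_os, edge)
print(f"overall steady-state availability: {overall:.6f}")
```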

    Hardware-In-the-loop simulation platform for the design, testing and validation of autonomous control system for unmanned underwater vehicle

    No full text
    Significant advances in various relevant science and engineering disciplines have propelled the development of more advanced, yet reliable and practical, underwater vehicles. A great array of vehicle types and applications has been produced, along with a wide range of innovative approaches for enhancing the performance of unmanned underwater vehicles (UUVs). These recent advances extend UUVs’ flight envelope toward that of manned vehicles. Undertaking longer missions will therefore require more advanced control and navigation to maintain an accurate position over a larger operational envelope, particularly in close proximity to obstacles such as manned vehicles, pipelines, and underwater structures. In this case, a sufficiently good model is a prerequisite for control system design. System evaluation and testing of unmanned underwater vehicles in certain environments can be tedious, time-consuming, and expensive. This paper focuses on developing a dynamic model of a UUV for the purpose of guidance and control. Along with this, a novel HILS (Hardware-In-the-Loop Simulation)-based framework for the rapid construction of testing scenarios with embedded systems has been investigated. The modeling approach is implemented for the AUV Squid, an autonomous underwater vehicle that was designed, developed, and tested by a research team at the Center for Unmanned System Studies at Institut Teknologi Bandung.
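    To give a flavor of the plant side of such a HILS loop, here is a small Python sketch (illustrative only; the coefficients and controller are assumed and bear no relation to the AUV Squid model): a 1-DOF surge model integrated with forward Euler, closed at a fixed rate by a stand-in proportional controller playing the role of the embedded autopilot.

```python
# Illustrative 1-DOF surge dynamics for a HIL-style closed loop.
# Mass and drag coefficients are assumed values, not identified parameters.
M_TOTAL = 85.0  # rigid-body mass plus surge added mass (kg), assumed
D_LIN = 20.0    # linear drag coefficient (kg/s), assumed
D_QUAD = 35.0   # quadratic drag coefficient (kg/m), assumed

def surge_step(u: float, thrust: float, dt: float) -> float:
    """Advance surge velocity u (m/s) one Euler step under thruster force (N)."""
    du = (thrust - D_LIN * u - D_QUAD * u * abs(u)) / M_TOTAL
    return u + du * dt

# HIL-style loop at 100 Hz: the controller side would normally run on the
# embedded target; here a P controller stands in for it.
u, u_ref, dt, kp = 0.0, 1.5, 0.01, 120.0
for _ in range(500):
    thrust = kp * (u_ref - u)      # controller side of the loop
    u = surge_step(u, thrust, dt)  # simulated plant side of the loop
print(f"surge velocity after {500 * dt:.1f} s: {u:.3f} m/s")
```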

    Multiagent Reinforcement Learning Based on Fusion-Multiactor-Attention-Critic for Multiple-Unmanned-Aerial-Vehicle Navigation Control

    No full text
    The proliferation of unmanned aerial vehicles (UAVs) has spawned a variety of intelligent services, where efficient coordination plays a significant role in increasing the effectiveness of cooperative execution. However, due to the limited operational time and range of UAVs, achieving highly efficient coordinated actions is difficult, particularly in unknown dynamic environments. This paper proposes a multiagent deep reinforcement learning (MADRL)-based fusion-multiactor-attention-critic (F-MAAC) model for the energy-efficient cooperative navigation control of multiple UAVs. The proposed model is built on the multiactor-attention-critic (MAAC) model and offers two significant advances. The first is a sensor fusion layer, which enables the actor network to utilize all required sensor information effectively. The second is a layer that computes the dissimilarity weights of different agents, added to compensate for the information lost through the attention layer of the MAAC model. We utilize the UAV LDS (logistic delivery service) environment created with the Unity engine to train the proposed model and verify its energy efficiency. A feature that measures the total distance traveled by the UAVs is incorporated into the UAV LDS environment to validate energy efficiency. To demonstrate the performance of the proposed model, F-MAAC is compared with several conventional reinforcement learning models in two use cases. First, we compare the F-MAAC model to the DDPG, MADDPG, and MAAC models based on mean episode rewards over 20k training episodes. The two top-performing models (F-MAAC and MAAC) are then retrained for 150k episodes. Our study uses the total number of deliveries completed within the same period and within the same distance traveled to represent energy efficiency. According to our simulation results, the F-MAAC model outperforms the MAAC model, making 38% more deliveries in 3000 time steps and 30% more deliveries per 1000 m of distance traveled.
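    To illustrate the two ingredients highlighted above, the following numpy sketch combines scaled dot-product attention over the other agents' encodings with a dissimilarity weighting; the shapes, the encoding source, and the equal-weight fusion are all assumptions for illustration, not the paper's F-MAAC implementation.

```python
# Sketch: attention over other agents fused with dissimilarity weights.
# Encodings are random stand-ins for per-agent state-action embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 4, 8
enc = rng.normal(size=(n_agents, d))  # assumed per-agent encodings

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def critic_context(i: int) -> np.ndarray:
    """Weighted context over other agents for agent i's central critic."""
    others = [j for j in range(n_agents) if j != i]
    q, keys = enc[i], enc[others]
    attn_w = softmax(keys @ q / np.sqrt(d))  # scaled dot-product attention
    dis = np.array([np.linalg.norm(enc[i] - enc[j]) for j in others])
    dis_w = softmax(dis)                     # favor the most dissimilar agents
    w = 0.5 * attn_w + 0.5 * dis_w           # assumed equal-weight fusion
    return (w[:, None] * keys).sum(axis=0)

context = critic_context(0)  # vector fed alongside agent 0's own encoding
print(context.shape)         # -> (8,)
```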
